>> % Get a random matrix
>> rand_mat = rand(4,4)

rand_mat =

    0.95013     0.8913      0.82141     0.92181
    0.23114     0.7621      0.4447      0.73821
    0.60684     0.45647     0.61543     0.17627
    0.48598     0.018504    0.79194     0.40571

>> % Normalize each row to sum to 1:
>> rowsums = sum(rand_mat, 2);
>> P = diag(1./rowsums) * rand_mat

P =

    0.26506     0.24864     0.22915     0.25716
    0.10621     0.3502      0.20435     0.33923
    0.32714     0.24607     0.33177     0.095022
    0.28551     0.010871    0.46526     0.23835

>> sum(P,2)    % just to make sure

ans =

    1
    1
    1
    1

>> % Let's consider two different initial distributions:
>> alpha1 = [1 0 0 0];
>> alpha3 = [0 0 1 0];

>> % What is the distribution of X1?
>> alpha1 * (P^1)

ans =

    0.26506     0.24864     0.22915     0.25716

>> % It got spread out a little bit.
>> % What is the distribution of X20?
>> alpha1 * (P^20)

ans =

    0.25414     0.21739     0.30749     0.22099

>> % What if we start at state 3 instead?
>> alpha3 * (P^20)

ans =

    0.25414     0.21739     0.30749     0.22099

>> % Practically the same!
>> % Why is that? Let's look at P^20:
>> twentystep = P^20

twentystep =

    0.25414     0.21739     0.30749     0.22099
    0.25414     0.21739     0.30749     0.22099
    0.25414     0.21739     0.30749     0.22099
    0.25414     0.21739     0.30749     0.22099

>> % All the rows are the same! So no matter which row you select
>> % using alpha, you get the same answer.

>> % Now, can we start with a distribution that won't be
>> % changed after one step?
>> % Let's take the first row of the twentystep matrix:
>> alpha_s = twentystep(1,:)

alpha_s =

    0.25414     0.21739     0.30749     0.22099

>> % Try taking one step:
>> alpha_s * P

ans =

    0.25414     0.21739     0.30749     0.22099

>> % Sure enough, it wasn't changed! What we have found is
>> % called the stationary distribution.

>> % Is there a way to calculate it without computing P^20?
>> % Solve:  pi_vec = pi_vec * P
>> % or,     pi_vec - pi_vec * P = zero vector
>> % or,     pi_vec * (I - P)   = zero vector
>> pi_vec = zeros(size(alpha1)) * inv( eye(size(P)) - P )
Warning: Matrix is close to singular or badly scaled.
         Results may be inaccurate. RCOND = 3.657120e-17.

pi_vec =

    0     0     0     0

>> % What happened?!?
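The same experiment translates directly out of MATLAB. Here is a plain-Python sketch of ours (not part of the session above): since `rand(4,4)` is not repeatable, it uses a fixed positive matrix, and the helper names (`mat_mul`, `mat_pow`, `row_normalize`) are our own.

```python
def mat_mul(A, B):
    """Multiply two matrices stored as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def mat_pow(M, n):
    """M^n by repeated multiplication (fine for a tiny matrix)."""
    R = M
    for _ in range(n - 1):
        R = mat_mul(R, M)
    return R

def row_normalize(M):
    """Divide each row by its sum, so every row sums to 1 (stochastic)."""
    return [[x / sum(row) for x in row] for row in M]

# An arbitrary positive 4x4 matrix, fixed for repeatability
raw = [[0.95, 0.89, 0.82, 0.92],
       [0.23, 0.76, 0.44, 0.73],
       [0.60, 0.45, 0.61, 0.17],
       [0.48, 0.02, 0.79, 0.40]]
P = row_normalize(raw)

# After 20 steps the rows of P^20 are numerically identical: the chain
# has forgotten which state it started in.
P20 = mat_pow(P, 20)
spread = max(abs(P20[i][j] - P20[0][j]) for i in range(4) for j in range(4))
print(spread < 1e-6)   # True: all rows agree
```

Any positive starting matrix gives the same picture, because a strictly positive stochastic matrix mixes at a geometric rate.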
>> % Two things. First, I - P isn't invertible: its rank is less than 4.
>> rank( eye(size(P)) - P )

ans =

    3

>> % Second, we then multiplied by a zero vector, so of course we got zeros.
>> % It's a good thing I - P isn't invertible: if it were, the only
>> % solution of pi_vec * (I - P) = 0 would be the zero vector, and
>> % there would be no hope.

>> % Here's one way to do it (it only works for a small MC, and it's a
>> % bad idea even then!):
>> [e_vecs, e_vals] = eig(P');   % note the transpose!
>> which_one = find(diag(e_vals) == 1);
>> tmp = e_vecs(:,which_one);    % the column corresponding to the eigenvalue 1
>> % The eigenvector doesn't always add up to 1, so re-normalize.
>> % Also, convert it to a row vector:
>> pi_vec = tmp' ./ sum(tmp);
>> % Check it!
>> [pi_vec ; pi_vec * P ]

ans =

    0.25414     0.21739     0.30749     0.22099
    0.25414     0.21739     0.30749     0.22099

>> % Sure enough, the distribution after one step is the same as the original.

>> %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

>> % Now let's try it for the queueing-like chain.

>> arr_prob = 0.4;
>> svc_prob = 1 - arr_prob;

>> arr_mat = [ ...
    0 0 0 0 0 0 0; ...
    0 0 0 1 0 0 0; ...
    0 0 0 0 1 0 0; ...
    0 0 0 0 0 1 0; ...
    0 0 0 0 0 0 1; ...
    0 0 0 0 0 0 0; ...
    0 0 0 0 0 0 0];
>> svc_mat = [ ...
    0 0 0 0 0 0 0; ...
    0 0 1 0 0 0 0; ...
    1 0 0 0 0 0 0; ...
    0 0 0 0 1 0 0; ...
    0 1 0 0 0 0 0; ...
    0 0 0 0 0 0 0; ...
    0 0 0 0 0 0 0];
>> ones_mat = [ ...
    0 1 0 0 0 0 0; ...
    0 0 0 0 0 0 0; ...
    0 0 0 0 0 0 0; ...
    0 0 0 0 0 0 0; ...
    0 0 0 0 0 0 0; ...
    0 0 0 0 0 0 1; ...
    0 0 0 1 0 0 0];
>> % We construct the matrix this way so that if we ever wanted to
>> % change the 0.4 and 0.6, we could do it in one place.
>> % Now let's combine the three matrices into one:
>> P = arr_prob * arr_mat + svc_prob * svc_mat + ones_mat

P =

    0      1      0      0      0      0      0
    0      0      0.6    0.4    0      0      0
    0.6    0      0      0      0.4    0      0
    0      0      0      0      0.6    0.4    0
    0      0.6    0      0      0      0      0.4
    0      0      0      0      0      0      1
    0      0      0      1      0      0      0

>> % What does P^20 look like?
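As an aside, the build-P-from-pieces pattern carries over to any language. In this Python sketch of ours (not from the MATLAB session), each component matrix marks where one kind of transition goes; scaling each by its probability and summing gives P, and a quick sanity check is that every row sums to 1, i.e. no transition was forgotten.

```python
arr_prob = 0.4
svc_prob = 1 - arr_prob

# Indicator matrices: where arrivals, services, and forced moves go.
arr_mat  = [[0,0,0,0,0,0,0],
            [0,0,0,1,0,0,0],
            [0,0,0,0,1,0,0],
            [0,0,0,0,0,1,0],
            [0,0,0,0,0,0,1],
            [0,0,0,0,0,0,0],
            [0,0,0,0,0,0,0]]
svc_mat  = [[0,0,0,0,0,0,0],
            [0,0,1,0,0,0,0],
            [1,0,0,0,0,0,0],
            [0,0,0,0,1,0,0],
            [0,1,0,0,0,0,0],
            [0,0,0,0,0,0,0],
            [0,0,0,0,0,0,0]]
ones_mat = [[0,1,0,0,0,0,0],
            [0,0,0,0,0,0,0],
            [0,0,0,0,0,0,0],
            [0,0,0,0,0,0,0],
            [0,0,0,0,0,0,0],
            [0,0,0,0,0,0,1],
            [0,0,0,1,0,0,0]]

# Combine: P = arr_prob * arr_mat + svc_prob * svc_mat + ones_mat
P = [[arr_prob * a + svc_prob * s + o
      for a, s, o in zip(ra, rs, ro)]
     for ra, rs, ro in zip(arr_mat, svc_mat, ones_mat)]

# Sanity check: every row must sum to 1, or P is not a transition matrix.
print(all(abs(sum(row) - 1) < 1e-12 for row in P))   # True
```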
>> P^20

ans =

    0          0          0.30354    0.69646    0          0          0
    0.18212    0          0          0          0.53929    0.27858    0
    0          0.50581    0          0          0          0          0.49419
    0          0.50553    0          0          0          0          0.49447
    0          0          0.3034     0.6966     0          0          0
    0          0          0.3032     0.6968     0          0          0
    0.18192    0          0          0          0.53936    0.27872    0

>> % Hmm, not all the rows are the same. What about P^200?
>> P^200

ans =

    0          0          0.30337    0.69663    0          0          0
    0.18202    0          0          0          0.53933    0.27865    0
    0          0.50562    0          0          0          0          0.49438
    0          0.50562    0          0          0          0          0.49438
    0          0          0.30337    0.69663    0          0          0
    0          0          0.30337    0.69663    0          0          0
    0.18202    0          0          0          0.53933    0.27865    0

>> % Still not all the same. The problem is that this chain is periodic
>> % (period = 3), so P^n cycles instead of converging.
>> % Let's find a pi_vec anyway:
>> [e_vecs, e_vals] = eig(P');   % note the transpose!
>> which_one = find(abs(diag(e_vals) - 1) < 0.00001);
>> tmp = e_vecs(:,which_one);    % the column corresponding to the eigenvalue 1
>> % The eigenvector doesn't always add up to 1, so re-normalize.
>> % Also, convert it to a row vector:
>> pi_vec = tmp' ./ sum(tmp);
>> % Check it!
>> [pi_vec ; pi_vec * P ]

ans =

    0.060674    0.16854    0.10112    0.23221    0.17978    0.092884    0.16479
    0.060674    0.16854    0.10112    0.23221    0.17978    0.092884    0.16479

>> % So we found a pi_vec that works, but it isn't a vector of limiting
>> % probabilities -- a periodic chain has none.
>> % It is still the long-run proportion of time that the chain spends
>> % in each state, though.
>> % And of course it's the stationary distribution as well.
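Two cross-checks on this conclusion, sketched in Python (ours, not part of the MATLAB session). First, pi can be computed without `eig` at all: solve the linear system pi * (I - P) = 0 together with sum(pi) = 1, replacing one of the redundant balance equations by the normalization. Second, even though P^n never converges for a period-3 chain, the running average (P + P^2 + ... + P^N)/N does, and every row of that average is pi, which is exactly the "long-run proportion of time" reading.

```python
def mat_mul(A, B):
    """Multiply two matrices stored as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c]
                              for c in range(r + 1, n))) / M[r][r]
    return x

# The 7-state queueing chain from the transcript (arr_prob 0.4, svc_prob 0.6)
P = [[0,   1,   0,   0,   0,   0,   0  ],
     [0,   0,   0.6, 0.4, 0,   0,   0  ],
     [0.6, 0,   0,   0,   0.4, 0,   0  ],
     [0,   0,   0,   0,   0.6, 0.4, 0  ],
     [0,   0.6, 0,   0,   0,   0,   0.4],
     [0,   0,   0,   0,   0,   0,   1  ],
     [0,   0,   0,   1,   0,   0,   0  ]]
n = len(P)

# pi * (I - P) = 0 transposes to (I - P)' * pi' = 0; swap the last
# (redundant) equation for sum(pi) = 1 to pin down the scale.
A = [[(1.0 if i == j else 0.0) - P[j][i] for j in range(n)] for i in range(n)]
A[-1] = [1.0] * n
b = [0.0] * (n - 1) + [1.0]
pi_vec = solve(A, b)
print([round(x, 5) for x in pi_vec])
# Should match the eig() answer above, approx
# [0.06067, 0.16854, 0.10112, 0.23221, 0.17978, 0.09288, 0.16479]

# Cesaro average of the first N powers: converges despite the periodicity.
N = 3000
avg = [[0.0] * n for _ in range(n)]
power = [row[:] for row in P]
for _ in range(N):
    avg = [[a + p / N for a, p in zip(ra, rp)] for ra, rp in zip(avg, power)]
    power = mat_mul(power, P)
print(max(abs(avg[i][j] - pi_vec[j])
          for i in range(n) for j in range(n)) < 0.01)   # True
```

The replace-one-equation trick is the standard small-chain method; it avoids both the singular `inv(I - P)` and the eigen-decomposition.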